Recommender systems have been widely used in many domains such as music, movies, and online shopping. The art world, after largely avoiding digitization, recently reached a technological turning point driven by the pandemic, which led to significant growth in online sales and to quantitative online data about artists and artworks. In this work, we propose a content-based recommendation system that relies on images of artworks and on contextual metadata about artists. The artworks we collected and annotated provide high-level, art-specific information used to build a completely unique database on which our models are trained. With this information, we construct a proximity graph between artworks. Similarly, we use NLP techniques to characterize artists' practices and to extract information from exhibition and other event histories, so as to create a proximity graph between artists. The power of graph analysis enables us to provide an artwork recommendation system based on a combination of visual and contextual information from both artworks and artists. Evaluated by a panel of art experts, our recommendations obtained an average final score of 75% when compared with their professional assessments.
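As a rough illustration of how the two proximity graphs could be combined at recommendation time, here is a minimal sketch assuming networkx and precomputed pairwise similarities; all function names, thresholds, and weights are illustrative, not the authors' implementation.

```python
# Minimal sketch: combine artwork and artist proximity graphs for recommendation.
# Assumes precomputed pairwise similarities; names, thresholds, and weights are illustrative.
import networkx as nx

def build_proximity_graph(items, similarity, threshold=0.5):
    """Connect items whose pairwise similarity exceeds a threshold."""
    g = nx.Graph()
    g.add_nodes_from(items)
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            s = similarity(a, b)
            if s >= threshold:
                g.add_edge(a, b, weight=s)
    return g

def recommend(artwork_graph, artist_graph, artwork_to_artist, query, k=5, alpha=0.7):
    """Score candidates by a weighted mix of artwork proximity and artist proximity."""
    scores = {}
    q_artist = artwork_to_artist[query]
    for cand in artwork_graph.nodes:
        if cand == query:
            continue
        art_sim = artwork_graph[query][cand]["weight"] if artwork_graph.has_edge(query, cand) else 0.0
        c_artist = artwork_to_artist[cand]
        artist_sim = artist_graph[q_artist][c_artist]["weight"] if artist_graph.has_edge(q_artist, c_artist) else 0.0
        scores[cand] = alpha * art_sim + (1 - alpha) * artist_sim
    return sorted(scores, key=scores.get, reverse=True)[:k]
```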
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
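To make the patch-based strategy mentioned above concrete, here is a minimal NumPy sketch of tiling an oversized image into fixed-size training patches; the patch size and stride are illustrative, and real pipelines typically add padding, overlap, and augmentation.

```python
# Minimal sketch: tile a large image into fixed-size patches for patch-based training.
# Patch size and stride are illustrative choices.
import numpy as np

def extract_patches(image, patch_size=256, stride=256):
    """Yield square patches scanned over an (H, W, C) image."""
    h, w = image.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            yield image[y:y + patch_size, x:x + patch_size]

# Example: a 1024x1024 RGB image yields a 4x4 grid of 256x256 patches.
patches = list(extract_patches(np.zeros((1024, 1024, 3), dtype=np.uint8)))
assert len(patches) == 16
```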
Prescriptive Process Monitoring systems recommend, during the execution of a business process, interventions that, if followed, prevent a negative outcome of the process. Such interventions have to be reliable, that is, they have to guarantee the achievement of the desired outcome or performance, and they have to be flexible, that is, they have to avoid overturning the normal process execution or forcing the execution of a given activity. Most of the existing Prescriptive Process Monitoring solutions, however, while performing well in terms of recommendation reliability, provide the users with very specific (sequences of) activities that have to be executed, without considering the feasibility of these recommendations. To address this issue, we propose a new Outcome-Oriented Prescriptive Process Monitoring system that recommends temporal relations between activities that have to be guaranteed during the process execution in order to achieve a desired outcome. This softens the mandatory execution of an activity at a given point in time, thus leaving the user more freedom in deciding which interventions to put in place. Our approach defines these temporal relations with Linear Temporal Logic over finite traces (LTLf) patterns, which are used as features to describe the historical process data recorded in an event log by the information systems supporting the execution of the process. The encoded log is used to train a Machine Learning classifier to learn a mapping between the temporal patterns and the outcome of a process execution. The classifier is then queried at runtime to return, as recommendations, the most salient temporal patterns whose satisfaction maximizes the likelihood of a certain outcome for an ongoing process execution. The proposed system is assessed using a pool of 22 real-life event logs that have already been used as a benchmark in the Process Mining community.
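As a rough sketch of the encoding and training step, the snippet below marks each trace with the satisfaction of simple temporal patterns and fits a classifier on the resulting features; the two hand-written patterns (response, precedence), the toy log, and the scikit-learn model are illustrative stand-ins for the LTLf templates and classifier used in the paper.

```python
# Minimal sketch: encode traces by satisfaction of simple temporal patterns, train a classifier.
from sklearn.ensemble import RandomForestClassifier

def response(trace, a, b):
    """Every occurrence of a is eventually followed by b."""
    return all(b in trace[i + 1:] for i, e in enumerate(trace) if e == a)

def precedence(trace, a, b):
    """b occurs only if a occurred before it."""
    return all(a in trace[:i] for i, e in enumerate(trace) if e == b)

patterns = [("response(A,B)", lambda t: response(t, "A", "B")),
            ("precedence(A,C)", lambda t: precedence(t, "A", "C"))]

def encode(trace):
    return [int(check(trace)) for _, check in patterns]

# Toy event log: each trace is a sequence of activities with a binary outcome label.
log = [(["A", "B", "C"], 1), (["A", "C"], 0), (["A", "B"], 1), (["C", "B"], 0)]
X = [encode(trace) for trace, _ in log]
y = [label for _, label in log]

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
# At runtime, the most predictive patterns can be surfaced as recommendations.
print(dict(zip([name for name, _ in patterns], clf.feature_importances_)))
```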
Question answering systems are recognized as a popular and frequently effective means of information seeking on the web. In such systems, information seekers can obtain a short response to their query by posing a question in natural language. Interactive question answering is a recently proposed and increasingly popular solution that sits at the intersection of question answering and dialogue systems. On the one hand, the user can ask a question in plain language and find an actual answer to their inquiry; on the other hand, if there are multiple possible answers, very few answers, or ambiguities in the initial request, the system can prolong the question-answering session into a dialogue. By allowing users to ask further questions, interactive question answering enables them to interact with the system dynamically and obtain more precise results. This survey offers a detailed overview of the interactive question-answering methods prevalent in the current literature. It begins by explaining the foundational principles of question answering systems, defining new notation and taxonomies to combine all identified works within a unified framework. Published work on interactive question answering systems is then presented and examined in terms of the proposed methodology, evaluation approaches, and datasets/application domains. We also describe trends around specific tasks and issues raised by the community, thus shedding light on the future interests of scholars. Our work is further supported by a GitHub page that synthesizes all the major topics covered in this literature study: https://sisinflab.github.io/interactive-question-answering-systems-survey/
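Purely to illustrate the interaction pattern described above (a direct answer when the request is unambiguous, a clarification turn otherwise), here is a toy sketch; the retrieval back end and all names are hypothetical and not part of any surveyed system.

```python
# Toy sketch of an interactive QA loop: answer directly when unambiguous,
# otherwise extend the session with a clarification turn.
def retrieve_answers(question):
    # Hypothetical retrieval back end with a single deliberately ambiguous entry.
    toy_index = {"what is the capital of georgia": ["Atlanta (U.S. state)", "Tbilisi (country)"]}
    return toy_index.get(question.strip().lower().rstrip("?"), [])

def interactive_qa(question, ask_user=input):
    candidates = retrieve_answers(question)
    if len(candidates) == 1:          # unambiguous: behave like a classic QA system
        return candidates[0]
    if not candidates:                # nothing found: ask the user to rephrase
        return interactive_qa(ask_user("I found nothing. Could you rephrase? "), ask_user)
    choice = ask_user(f"I found several answers: {candidates}. Which one did you mean? ")
    return choice if choice in candidates else candidates[0]
```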
Time series anomaly detection has been recognized as critical for the reliable and efficient operation of real-world systems. Based on various assumptions about the characteristics of anomalies, many anomaly detection methods have been developed. However, due to the complex nature of real-world data, different anomalies within a time series usually have diverse profiles that support different anomaly assumptions. This makes it difficult to find a single anomaly detector that can consistently outperform all other models. In this work, to exploit the benefits of different base models, we propose a reinforcement learning-based model selection framework. Specifically, we first learn a pool of different anomaly detection models, and then use reinforcement learning to dynamically select a candidate model from these base models. Experiments on real-world data show that the proposed strategy can indeed outperform all baseline models in terms of overall performance.
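As a simplified illustration of dynamic selection over a detector pool, here is a bandit-style epsilon-greedy sketch; the paper's selector is a reinforcement learning agent, and the `.score()` interface and reward signal assumed below are illustrative.

```python
# Simplified epsilon-greedy selection over a pool of base anomaly detectors.
# Rewards would come from detection feedback in a real system.
import random

class DetectorSelector:
    def __init__(self, detectors, epsilon=0.1):
        self.detectors = detectors              # each detector is assumed to expose .score(window)
        self.epsilon = epsilon
        self.values = [0.0] * len(detectors)    # running estimate of each detector's reward
        self.counts = [0] * len(detectors)

    def select(self):
        if random.random() < self.epsilon:                                    # explore
            return random.randrange(len(self.detectors))
        return max(range(len(self.detectors)), key=lambda i: self.values[i])  # exploit

    def update(self, i, reward):
        self.counts[i] += 1
        self.values[i] += (reward - self.values[i]) / self.counts[i]

    def detect(self, window):
        i = self.select()
        return i, self.detectors[i].score(window)
```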
Data augmentation is an important component in evaluating the robustness of natural language processing (NLP) models and in increasing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (splits of the data according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, data cards, and robustness analysis results are publicly available in the NL-Augmenter repository (https://github.com/gem-benchmark/nl-augmenter).
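To illustrate the transformation/filter split the framework is organized around, here is a toy sketch; it is not NL-Augmenter's actual class interface, and the typo perturbation and length filter below are invented for illustration.

```python
# Toy illustration of the two building blocks: a transformation and a filter.
import random

def typo_perturbation(sentence, p=0.05, seed=0):
    """Transformation: randomly swap characters for keyboard neighbours to simulate typos."""
    rng = random.Random(seed)
    nearby = {"a": "qs", "e": "wr", "o": "ip", "t": "ry"}
    return "".join(rng.choice(nearby[c]) if c in nearby and rng.random() < p else c
                   for c in sentence)

def is_short(sentence, max_tokens=10):
    """Filter: keep only sentences below a length threshold."""
    return len(sentence.split()) <= max_tokens

data = ["The quick brown fox jumps over the lazy dog", "A short example sentence"]
augmented = [typo_perturbation(s) for s in data if is_short(s)]
print(augmented)
```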
Business process deviance refers to the phenomenon in which a subset of the executions of a business process deviate, in a negative or positive way, from its expected or desirable outcomes. Deviant executions of a business process include those that violate compliance rules, as well as executions that undershoot or exceed performance targets. Deviance mining is concerned with uncovering the reasons for deviant executions by analyzing the event logs stored by the systems supporting the execution of a business process. In this paper, the problem of explaining deviations in business processes is first investigated using features based on sequential and declarative patterns, as well as their combination. The explanations are then further improved by leveraging the data attributes of events and traces in event logs, through features based on pure data attribute values and on data-aware declarative rules. The explanations characterizing the deviances are extracted through direct and indirect methods for rule induction. Using real-life logs from multiple domains, a range of feature types and different forms of decision rules are evaluated in terms of their ability to accurately discriminate between non-deviant and deviant executions of a process, as well as the understandability of the final results returned to the users.
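As a rough sketch of deriving understandable rules from pattern-based trace features, the snippet below uses a decision tree as one direct rule-induction method; the sequential patterns, toy log, and labels are illustrative, not taken from the evaluated datasets.

```python
# Minimal sketch: discriminate deviant from normal traces with pattern features
# and read the induced decision tree back as rules.
from sklearn.tree import DecisionTreeClassifier, export_text

def contains_subsequence(trace, pattern):
    """Sequential-pattern feature: the pattern occurs in order (not necessarily contiguously)."""
    it = iter(trace)
    return all(any(e == p for e in it) for p in pattern)

patterns = [("A_before_B", ["A", "B"]), ("B_before_C", ["B", "C"])]
log = [(["A", "B", "C"], "normal"), (["A", "B", "C", "B"], "normal"),
       (["A", "C", "B"], "deviant"), (["C", "B", "A"], "deviant")]

X = [[int(contains_subsequence(trace, pat)) for _, pat in patterns] for trace, _ in log]
y = [label for _, label in log]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# The induced tree can be presented to users as human-readable decision rules.
print(export_text(tree, feature_names=[name for name, _ in patterns]))
```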
Breast cancer is the most commonly diagnosed cancer and accounts for the highest number of cancer-related deaths among women. Recent advances in diagnostic procedures, combined with large-scale screening policies, have significantly reduced mortality rates for breast cancer patients. However, manual inspection of tissue slides by pathologists is cumbersome, time-consuming, and subject to significant inter- and intra-observer variability. Recently, the advent of whole-slide scanning systems has enabled the rapid digitization of pathology slides and the development of digital workflows. These advances further make it possible to leverage Artificial Intelligence (AI) to assist, automate, and augment pathological diagnosis. However, AI techniques, and especially deep learning (DL), require large amounts of high-quality annotated data to learn from. Constructing such task-specific datasets poses several challenges, such as data acquisition constraints, time-consuming and expensive annotation, and the anonymization of private information. In this paper, we introduce the breast cancer subtyping (BRACS) dataset, a large cohort of annotated Hematoxylin and Eosin (H&E)-stained images intended to facilitate the characterization of breast lesions. BRACS contains 547 whole-slide images (WSIs) and 4539 regions of interest (ROIs) extracted from the WSIs. Each WSI, and its respective ROIs, is annotated into different lesion categories by the consensus of three board-certified pathologists. Specifically, BRACS includes three lesion types, namely benign, malignant, and atypical, which are further subtyped into seven categories. To the best of our knowledge, this is the largest annotated dataset for breast cancer subtyping at both the WSI and ROI level. Moreover, by including the understudied atypical lesions, BRACS offers a unique opportunity to leverage AI to better understand their characteristics.
PennyLane is a Python 3 software framework for the differentiable programming of quantum computers. The library provides a unified architecture for near-term quantum computing devices, supporting both qubit and continuous-variable paradigms. PennyLane's core feature is the ability to compute gradients of variational quantum circuits in a way that is compatible with classical techniques such as backpropagation. PennyLane thus extends the automatic differentiation algorithms common in optimization and machine learning to include quantum and hybrid computations. A plugin system makes the framework compatible with any gate-based quantum simulator or hardware. We provide plugins for hardware providers including the Xanadu Cloud, Amazon Braket, and IBM Quantum, allowing PennyLane optimizations to be run on publicly accessible quantum devices. On the classical side, PennyLane interfaces with accelerated machine learning libraries such as TensorFlow, PyTorch, JAX, and Autograd. PennyLane can be used for the optimization of variational quantum eigensolvers, quantum approximate optimization, quantum machine learning models, and many other applications.
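For concreteness, here is a small example of the core gradient workflow on the built-in default.qubit simulator; the specific circuit and parameter values are arbitrary choices for illustration.

```python
# Define a variational circuit as a QNode and differentiate it with qml.grad.
import pennylane as qml
from pennylane import numpy as np  # autograd-compatible NumPy shipped with PennyLane

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

params = np.array([0.1, 0.2], requires_grad=True)
print(circuit(params))            # expectation value of Pauli-Z on wire 0
print(qml.grad(circuit)(params))  # gradient with respect to both parameters
```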
Many challenging reinforcement learning (RL) problems require designing a distribution of tasks that can be applied to train effective policies. This distribution of tasks can be specified by a curriculum. A curriculum is meant to improve the results of learning and to accelerate it. We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning in which a task sequence is created based on the success rate of each task. In this setting, each task is an algorithmically created environment instance with a unique configuration. The algorithm selects the order of tasks that provides the fastest learning for agents. The probability of selecting any of the tasks for the next stage of learning is determined by evaluating its performance score in previous stages. Experiments were carried out in the Partially Observable Grid Environment for Multiple Agents (POGEMA) and the Procgen benchmark. We demonstrate that SITP matches or surpasses the results of other curriculum design methods. Our method can be implemented with a handful of minor modifications to any standard RL framework and provides useful prioritization with minimal computational overhead.
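As a rough sketch of success-based task prioritization, the selector below samples the next training task from a softmax over recent changes in per-task success rate; this scoring rule is an illustrative learning-progress proxy, not the paper's exact formula.

```python
# Minimal sketch: sample the next training task based on recent success-rate changes.
import numpy as np

class TaskPrioritizer:
    def __init__(self, num_tasks, temperature=0.1):
        self.prev = np.zeros(num_tasks)   # success rate at the previous stage
        self.curr = np.zeros(num_tasks)   # success rate at the current stage
        self.temperature = temperature

    def update(self, task_id, success_rate):
        self.prev[task_id] = self.curr[task_id]
        self.curr[task_id] = success_rate

    def sample_task(self, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        progress = np.abs(self.curr - self.prev)      # prioritize tasks being learned fastest
        probs = np.exp(progress / self.temperature)
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))
```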